SDCA without Duality, Regularization, and Individual Convexity
Author: Shai Shalev-Shwartz
Abstract
Stochastic Dual Coordinate Ascent (SDCA) is a popular method for solving regularized loss minimization in the case of convex losses. We describe variants of SDCA that do not require explicit regularization and do not rely on duality. We prove linear convergence rates even if individual loss functions are non-convex, as long as the expected loss is strongly convex.
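To make the dual-free idea concrete, below is a minimal Python sketch of the regularized dual-free update from the companion paper "SDCA without Duality" (the variants described in this paper further drop the explicit regularizer). The names grad_loss, lam, and eta are illustrative assumptions rather than the paper's notation; this is a sketch of the update rule, not a reference implementation.

    import numpy as np

    def dual_free_sdca(grad_loss, n, d, lam, eta, num_iters, seed=0):
        # Sketch of dual-free SDCA for min_w (1/n) sum_i phi_i(w) + (lam/2)*||w||^2.
        # grad_loss(i, w) is assumed to return the gradient of phi_i at w.
        rng = np.random.default_rng(seed)
        alpha = np.zeros((n, d))            # one pseudo-dual vector per example
        w = alpha.sum(axis=0) / (lam * n)   # invariant: w = (1/(lam*n)) * sum_i alpha_i
        for _ in range(num_iters):
            i = rng.integers(n)             # sample one example uniformly at random
            v = grad_loss(i, w) + alpha[i]  # gap between alpha_i and -grad phi_i(w)
            alpha[i] -= eta * lam * n * v   # push alpha_i toward -grad phi_i(w)
            w -= eta * v                    # same direction, preserving the invariant
        return w

    # Hypothetical usage for ridge regression, phi_i(w) = 0.5*(x_i @ w - y_i)**2:
    #   grad_loss = lambda i, w: (X[i] @ w - y[i]) * X[i]

No conjugate functions or dual objective appear anywhere: each alpha_i simply tracks -grad phi_i(w), and the two updates are coupled so that w stays equal to (1/(lam*n)) * sum_i alpha_i throughout. At a fixed point, lam*w + (1/n) sum_i grad phi_i(w) = 0, which is exactly the optimality condition of the regularized problem.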
Similar Papers
Linear convergence of SDCA in statistical estimation
In this paper, we consider stochastic dual coordinate ascent (SDCA) without strong convexity or even convexity assumptions. We show that SDCA converges linearly under a mild condition termed restricted strong convexity. This covers a wide array of popular statistical models including Lasso, group Lasso, logistic regression with l1 regularization, corrected Lasso, and linear regression with SCAD reg...
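For context, restricted strong convexity is usually stated in the high-dimensional statistics literature as a curvature lower bound that holds only up to a tolerance term. A common form, given here as background rather than as this paper's exact definition, is

    f(w + \Delta) - f(w) - \langle \nabla f(w), \Delta \rangle \ge \gamma \|\Delta\|_2^2 - \tau \|\Delta\|_1^2

for a curvature constant \gamma > 0 and a tolerance \tau that shrinks with the sample size. This is weaker than strong convexity (which corresponds to \tau = 0) yet still strong enough to yield linear rates over the sparse directions relevant to models such as the Lasso.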
Linear Convergence of the Randomized Feasible Descent Method Under the Weak Strong Convexity Assumption
In this paper we generalize the framework of the feasible descent method (FDM) to a randomized variant (R-FDM) and a coordinate-wise randomized variant (RC-FDM). We show that the famous SDCA algorithm for optimizing the SVM dual problem, as well as the stochastic coordinate descent method for the LASSO problem, fits into the RC-FDM framework. We prove linear convergence for both R-FDM ...
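As an illustration of the kind of method RC-FDM covers, here is a minimal randomized coordinate descent sketch for the Lasso, min_w (1/(2n))*||Xw - y||^2 + lam*||w||_1. This is a generic textbook update written in Python for concreteness, not code from the paper, and the variable names are assumptions.

    import numpy as np

    def soft_threshold(z, t):
        # Proximal map of t*|.|: sign(z) * max(|z| - t, 0).
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def randomized_cd_lasso(X, y, lam, num_iters, seed=0):
        # Randomized coordinate descent for (1/(2n))*||Xw - y||^2 + lam*||w||_1.
        # Assumes every column of X is nonzero.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        col_sq = (X ** 2).sum(axis=0) / n  # (1/n)*||x_j||^2 for each coordinate j
        resid = y - X @ w                  # maintained residual y - Xw
        for _ in range(num_iters):
            j = rng.integers(d)            # pick a coordinate uniformly at random
            rho = X[:, j] @ (resid + X[:, j] * w[j]) / n   # partial correlation
            w_new = soft_threshold(rho, lam) / col_sq[j]   # exact 1-D minimizer
            resid += X[:, j] * (w[j] - w_new)              # keep resid = y - Xw
            w[j] = w_new
        return w

Maintaining the residual incrementally keeps each step at O(n) cost, which is what makes the coordinate-wise scheme competitive with full-gradient methods on this problem.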
SDCA without Duality
Stochastic Dual Coordinate Ascent is a popular method for solving regularized loss minimization for the case of convex losses. In this paper we show how a variant of SDCA can be applied to non-convex losses. We prove a linear convergence rate even if individual loss functions are non-convex, as long as the expected loss is strongly convex.
Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption
In this paper we generalize the framework of the Feasible Descent Method (FDM) to a Randomized (R-FDM) and a Randomized Coordinate-wise Feasible Descent Method (RC-FDM) framework. We show that many machine learning algorithms, including the famous SDCA algorithm for optimizing the SVM dual problem and the stochastic coordinate descent method for the LASSO problem, fit into the framework of RC-F...
A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
We develop a simple routine unifying the analysis of several important recently developed stochastic optimization methods, including SAGA, Finito, and stochastic dual coordinate ascent (SDCA). First, we show an intrinsic connection between stochastic optimization methods and dynamic jump systems, and propose a general jump system model for stochastic optimization methods. Our proposed model reco...